
    Memory sharing for interactive ray tracing on clusters

    Manuscript
    We present recent results on the application of distributed shared memory to image-parallel ray tracing on clusters. Image-parallel rendering is traditionally limited to scenes small enough to be replicated in the memory of each node, because any processor may require access to any piece of the scene. We remove this limitation by making all of a cluster's memory available through software distributed shared memory layers. With gigabit Ethernet connections, this mechanism is fast enough for interactive rendering of multi-gigabyte datasets. Object- and page-based distributed shared memories are compared, and optimizations for efficient memory use are discussed.
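
    As a rough illustration of the image-parallel organization described in this abstract, the sketch below shows a worker's render loop in which every scene access goes through a distributed-shared-memory read, so any node can touch any part of the scene. The names (Dsm, next_tile, trace) and the stub bodies are illustrative assumptions, not the renderer's actual interface.

        // Illustrative sketch (not the paper's code): an image-parallel worker that
        // renders tiles handed out by a supervisor, reaching all scene data through
        // a distributed-shared-memory read call so any node can access any part of it.
        #include <cstdint>
        #include <cstring>
        #include <vector>

        struct Tile  { int x0, y0, x1, y1; };
        struct Color { float r, g, b; };

        class Dsm {
        public:
            // A real layer would consult a local cache and fetch missing pages or
            // objects from their owner node; this placeholder just zero-fills.
            void read(uint64_t /*offset*/, void* dst, size_t bytes) {
                std::memset(dst, 0, bytes);
            }
        };

        // Placeholder work assignment: a fixed list of tiles instead of
        // messages from the supervisor process.
        static std::vector<Tile> g_tiles = {{0, 0, 32, 32}, {32, 0, 64, 32}};
        bool next_tile(Tile& t) {
            if (g_tiles.empty()) return false;
            t = g_tiles.back();
            g_tiles.pop_back();
            return true;
        }

        // Placeholder ray trace: reads one value through the DSM per pixel.
        Color trace(Dsm& scene, int px, int py) {
            float v = 0.0f;
            scene.read(static_cast<uint64_t>(py) * 4096 + px, &v, sizeof(v));
            return {v, v, v};
        }

        void worker_loop(Dsm& scene) {
            Tile t;
            while (next_tile(t)) {                 // pull tiles until the frame is done
                std::vector<Color> pixels;
                for (int y = t.y0; y < t.y1; ++y)
                    for (int x = t.x0; x < t.x1; ++x)
                        pixels.push_back(trace(scene, x, y));
                // In the real renderer the finished tile is sent back to the supervisor.
            }
        }

        int main() { Dsm scene; worker_loop(scene); }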

    Distributed interactive ray tracing for large volume visualization

    Journal Article
    We have constructed a distributed parallel ray tracing system that interactively produces isosurface renderings from large data sets on a cluster of commodity PCs. The program was derived from the SCI Institute's interactive ray tracer (*-Ray), which uses small to large shared memory platforms, such as the SGI Origin series, to interact with very large-scale data sets. Making this approach work efficiently on a cluster requires attention to numerous system-level issues, especially when rendering data sets larger than the address space of each cluster node.

    Memory-savvy distributed interactive ray tracing

    Journal Article
    Interactive ray tracing in a cluster environment requires close attention to the constraints of a loosely coupled distributed system. To render large scenes interactively, memory limits and network latency must be addressed efficiently. In this paper, we improve on previous systems by moving to a page-based distributed shared memory layer, which provides faster and easier access to a shared memory space. The technique is designed to take advantage of the large virtual memory space provided by 64-bit machines. We also examine task reuse through decentralized load balancing, and primitive reorganization to complement the shared memory system. These techniques improve memory coherence and are valuable when physical memory is limited.
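
    A minimal sketch of the page-based idea in this abstract, assuming a Linux-style mmap/SIGSEGV mechanism: the shared scene is reserved in the 64-bit virtual address space with no access rights, and the first touch of a page faults it in. The fetch_remote_page stub stands in for the real network transfer, and writes, coherence, and eviction are ignored.

        // Illustrative sketch of read-only demand paging in the style of a
        // page-based DSM. Linux-specific; all names here are ours, not the paper's.
        #include <sys/mman.h>
        #include <signal.h>
        #include <cstdint>
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        static const size_t kPage  = 4096;
        static uint8_t*     g_base = nullptr;   // start of the shared region
        static size_t       g_bytes = 0;        // size of the shared region

        // Placeholder for the network fetch: a real system would request the
        // page from the node that owns it; here we just fill a pattern.
        static void fetch_remote_page(size_t offset, void* dst) {
            std::memset(dst, static_cast<int>(offset / kPage) & 0xff, kPage);
        }

        static void fault_handler(int, siginfo_t* info, void*) {
            uint8_t* addr = static_cast<uint8_t*>(info->si_addr);
            if (addr < g_base || addr >= g_base + g_bytes) std::abort();  // genuine crash
            uint8_t* page = g_base + ((addr - g_base) / kPage) * kPage;
            mprotect(page, kPage, PROT_READ | PROT_WRITE);   // allow filling the page
            fetch_remote_page(static_cast<size_t>(page - g_base), page);
            mprotect(page, kPage, PROT_READ);                // back to read-only
        }

        uint8_t* dsm_map(size_t bytes) {
            g_bytes = bytes;
            // Reserve address space with no access; the first touch traps.
            g_base = static_cast<uint8_t*>(mmap(nullptr, bytes, PROT_NONE,
                                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
            struct sigaction sa {};
            sa.sa_flags = SA_SIGINFO;
            sa.sa_sigaction = fault_handler;
            sigaction(SIGSEGV, &sa, nullptr);
            return g_base;   // render threads index shared scene data through this pointer
        }

        int main() {
            uint8_t* scene = dsm_map(1 << 20);      // 1 MB shared region
            std::printf("%d\n", scene[5 * kPage]);  // first touch faults in page 5
        }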

    Distributed Interactive Ray Tracing for Large Volume Visualization

    Figure 1: Richtmyer-Meshkov instability at time steps 0, 45, 180, and 270. With 32 Linux PCs we are able to isosurface the full-resolution 7.5 GB volume at 6.7 frames per second (left) and 2.1 frames per second (right).
    We have constructed a distributed parallel ray tracing system that interactively produces isosurface renderings from large data sets on a cluster of commodity PCs. The program was derived from the SCI Institute's interactive ray tracer (*-Ray), which uses small to large shared memory platforms, such as the SGI Origin series, to interact with very large-scale data sets. Making this approach work efficiently on a cluster requires attention to numerous system-level issues, especially when rendering data sets larger than the address space of each cluster node. The rendering engine is an image-parallel ray tracer with a supervisor/workers organization. Each node in the cluster runs a multi-threaded application. A minimal abstraction layer on top of TCP links the nodes and enables asynchronous message handling. For large volumes, render threads obtain data bricks on demand from an object-based software distributed shared memory. Caching improves performance by reducing the amount of data transferred for a reasonable working-set size. For large data sets, the cluster-based interactive ray tracer performs comparably with an SGI Origin system. We examine the parameter space of the renderer and provide experimental results for interactive rendering of a large (7.5 GB) data set.
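
    The on-demand brick fetching with caching described above might look roughly like the sketch below: render threads request bricks by id, hits are served from local memory, and misses are fetched from the owning node and kept in an LRU cache sized to the working set. BrickId, fetch_brick_from_owner, and the cache capacity are hypothetical names and values, not the system's actual API, and thread safety and asynchronous messaging are omitted.

        // Illustrative sketch of an object-based DSM brick cache with LRU eviction.
        #include <cstdint>
        #include <list>
        #include <unordered_map>
        #include <utility>
        #include <vector>

        using BrickId = uint64_t;
        using Brick   = std::vector<float>;     // one brick of volume data

        // Placeholder for the real network request to the node that owns the brick.
        static Brick fetch_brick_from_owner(BrickId id) {
            return Brick(32 * 32 * 32, static_cast<float>(id));
        }

        class BrickCache {
        public:
            explicit BrickCache(size_t capacity) : capacity_(capacity) {}

            const Brick& get(BrickId id) {
                auto it = index_.find(id);
                if (it != index_.end()) {                   // hit: refresh LRU position
                    lru_.splice(lru_.begin(), lru_, it->second);
                    return it->second->second;
                }
                if (index_.size() >= capacity_) {           // full: evict least recent
                    index_.erase(lru_.back().first);
                    lru_.pop_back();
                }
                lru_.emplace_front(id, fetch_brick_from_owner(id));   // miss: fetch remotely
                index_[id] = lru_.begin();
                return lru_.front().second;
            }

        private:
            size_t capacity_;
            std::list<std::pair<BrickId, Brick>> lru_;      // most recently used at front
            std::unordered_map<BrickId, decltype(lru_)::iterator> index_;
        };

        int main() {
            BrickCache cache(128);             // working-set sized cache
            const Brick& b = cache.get(42);    // remote fetch on first use, cached after
            (void)b;
        }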

    uvcdat: UV-CDAT 2.6

    The UV-CDAT team is pleased to announce the release of UV-CDAT version 2.6. A DOI and change log are available for this release. Many thanks to users, testers, and developers for helping UV-CDAT reach this milestone. This is a bug-fix release: we have fixed several major and minor bugs in version 2.6, so we strongly recommend that users upgrade their UV-CDAT installation. From this release onward, UV-CDAT is distributed via conda:

        conda install -c uvcdat uvcdat

    or, to create a dedicated environment:

        conda create -n uvcdat-2.6 -c uvcdat uvcdat

    We also point users to an Askbot website that supports the UV-CDAT user community (version 2.2 onward). See: http://uvcdat.askbot.co